The Rationalization of Risk

Authors

  • Michael Mayerfeld Bell
  • Diane Bell Mayerfeld
Abstract

We argue that the much-discussed problem of "risk" is not a new feature of modern industrial society but only a modern conceptual language for discussing the age-old problems of uncertainty and control. What is different about the worries of the present day is not the number of hazards we face or the degree of uncertainty we feel about our lives, but rather the language we use to think and talk about them, a language that, in keeping with our time, is highly rationalistic. This rationalism is problematic because it can easily become rationalization. Risk explains uncertainty, and it also explains it away. It gives control and it takes control, and therefore we often feel trapped in an iron cage of risk. We critique that iron cage on several grounds. First, we offer a critique of the rationalization of risk assessment. Second, we critique the risk communication literature and its rationalizations. Third, we critique the recent emergence of risk as a significant theoretical category in environmental social science. Risk, we suggest in these critiques, has some strikingly undemocratic implications, and we strongly urge greater caution in its use by social science and by our political systems.

The Rationalization of Risk

Risk is in the air. Along with terms like "free market," "globalization," and "virtual reality," the late twentieth century is culturally and politically redolent with "risk." BSE, GMOs, antibiotic resistance. Hazardous waste, nitrates, lead. Radon, fine particulates, smog. Cancer, heart disease, AIDS. Car accidents, plane accidents, train accidents. Chernobyl, the ozone hole, toxic spills. The risks are calculated, debated, and paraded across the front page and TV screen. Corporations and governments hire "risk communication" consultants, and risk communication has become a legitimate field of academic endeavor. Sociologists hold international conferences on risk, there are at least half a dozen academic journals devoted to the study of risk, and new conferences and journals are in the works. Our local university library lists 2680 entries under the heading "risk," and nearly half of them–1310, to be precise–are from 1993 and after. According to one prominent sociologist, Ulrich Beck (1992 [1986], 1996), we are fast becoming a "risk society."

But what is risk? And why are we suddenly all talking about it? It is probably not that we live in more perilous times. One does not need to be a Julian Simon or other apologist for the new horrors modern industrialism has unleashed to observe that people in the past surely faced worries and hazards equal to or greater than those of the current day. After all, as Simon (1986, 1990, 1995) was fond of pointing out, a far greater proportion of today's population is living past the Biblical allotment of three score and ten years. But the issue here is not that life was brutish, nasty, and short in the bad old days, but rather that of uncertainty. The people of the past experienced much uncertainty in their lives, of course, and they sought ways to deal with the unknown: religion, magic, witchcraft, charms around the neck, and rituals performed under a full moon at midnight in a graveyard with a dead cat, such as Mark Twain so wonderfully described in The Adventures of Tom Sawyer.

When we speak of uncertainty we speak as well of control.
And through rituals with dead cats in the graveyard and sacrifices at the altar, the people of the past found ways to comprehend terror and the unknown, and thus to gain a measure of power over it.

We are not so different today. We face much uncertainty–many worries, hazards, and scary possibilities that seem beyond our powers to control. Although lives in the modern wealthy West are normally not so short, and perhaps not so brutish and nasty, there remains much that we cannot be sure about, many dark possibilities that we have to ponder. Living longer does not purge us of doubts and dangers. And we still need ways to contend with them.

This, then, is what risk is–a modern means, a contemporary conceptual language, for confronting uncertainty. What is different about the worries of the present day is not the number of hazards we face or the degree of uncertainty we feel about our lives, but rather the language we use to think and talk about our worries, a language that, in keeping with the spirit of our time, is highly rationalistic. Consider the connotations of the word "risk." Using the term immediately conjures up numbers and calculations in a way that words like hazard and concern and danger do not. Risk is imbued with the image of science, of studies that have been done or could be done. Risk turns witchcraft into statistics. Risk turns subjective uncertainties into objective probabilities, sanctified by the iron laws of mathematical logic and scientific method. Along with the "iron cage" of rationalism, to use Weber's well-known phrase, came the iron cage of risk.

The people of the past did not think that way as much. They had notions of numbers and chance and probability, of course, and they also had the word risk. But the coming of a scientific and moneyed world order developed these notions into defining features of daily life. As Simmel (1990 [1900]: 444-445) observed a century ago, "Gauging values in terms of money has taught us to determine and specify values down to the last farthing....The ideal of numerical calculability has been made possible in practical, and perhaps even in intellectual, life only through the money economy." In English, risk has come to be the word we most associate with the extension of rationalism into our outlook on danger and uncertainty.

Nor have the people of the present fully embraced the rationalization of risk. For example, a 1994 poll by the Pew Research Center (Pew, 1998) found that some 53 percent of US citizens claim they pray daily, and 61 percent believe in the power of God to perform miracles. (Indeed, the trend is up; the corresponding numbers in a 1987 Pew poll were 41 and 47 percent, respectively.) We remain fascinated by stories of the occult and by movies that weave magic and graveyard rituals into the plot. The New Age movement has drawn heavily on Wiccan and other traditions in an often stridently non-rationalistic way. The language of risk, however, has become–and remains–the most politically legitimate way for us to discuss and debate life's dangers and uncertainties.

The rationalization of risk was associated not only with the spirit of capitalism and money that Weber and Simmel wrote of, and the spirit of science that many others have described, but also with the rise of democracy, and there are some close ideological connections at work between rationalism and democracy.
Rationalism appears democratic because of its universalistic claim to apply to the entire demos and because of its objectivist claim to represent the perspective of the entire demos, as determined by science. Rational knowledge is knowledge that is supposed to be everybody's knowledge, at least potentially. It is unnamed, unidentified with a particular standpoint or culture, applying to all and representing all, and thus apparently democratic. Indeed, the political framework of democracy is increasingly directed by a risk-based regulatory apparatus. Risk is a legal entity that lies at the heart of, to take the United States as an example, the Emergency Planning and Community Right-to-Know Act; the Occupational Safety and Health Act; the Comprehensive Environmental Response, Compensation, and Liability Act; the National Environmental Policy Act; and the Resource Conservation and Recovery Act–all of which mandate risk communication to the public.

But while there may be democratic potential in the regulatory apparatus of risk, it is also a highly effective political tool for those who are able to negotiate its language to their advantage. In other words, to speak of risk is not only to speak of control but also of power. Risk is the product of rationalization in two senses. It represents the extension of rationalism into our comprehension of the unknown, giving us a sense of control over uncertainty in a way that appeals to the modern mind. It represents as well the legal authority of expertise to order our lives in ways that may advantage the interests behind the experts more than those of us in front of the experts, down in the audience. Risk explains, and it also explains away. It gives control and it takes control, and therefore we often feel trapped in an iron cage of risk.

Our goal in this paper is to critique these two rationalizations of risk and to develop a perspective on risk that connects knowledge to power and to identity–to the interests of those who construct our knowledge of risk and to the interests of those who are subject to it. Rather than a feature of the good democratic society, risk strikes us as a dangerously undemocratic concept, at least in its current development and use within Western societies–and, we will argue, within social science. We also contend that the heart of the problem of risk as rationalization is the opposition it sets up between those who supposedly view uncertainty rationally and those who supposedly do not, placing those on opposite sides of the dichotomy on unequal footing in democratic debate. Only by breaking this dichotomy, connecting knowledge of risk to power and to identity, can we break free of risk's iron cage.

Risk Assessment as Rationalization

Let us inspect these two rationalizations of risk more closely, first through a critique of the technical adequacy of the common means of risk assessment and second through a critique of the social adequacy of the common means of risk communication. Risk, we will show, is rational neither on its own logical grounds nor on our own democratic grounds. Here we will draw with abandon on the work of others, especially Shrader-Frechette (1991) and Wargo (1996), as well as further developing these often-neglected critiques.

To begin the critique of risk assessment, we must note the obvious distinction between precision and accuracy. Commonly, the language of risk is extremely precise.
The numbers cited are extraordinarily small, suggesting analytic precision in being able to make such distinctions: pollution concentrations in parts per billion, so many chances of cancer per million, and the like. In fact, these numbers are so small that they become meaningless to most people, and risk communication specialists often re-represent them in more folksy terms, such as one drop in an Olympic-size swimming pool.

These extremely precise numbers, however, are of course based on a whole series of assumptions, guesses, and extrapolations that limit their accuracy. Much of the protocol of risk assessment is in fact the codification of these necessary assumptions, guesses, and extrapolations. Researchers assume that laboratory animals, under compressed experimental conditions, will have the same physiological responses to "risk factors" as humans. Researchers make crude guesses about probable exposure to people, and, because data collection is very expensive, they extrapolate from the smallest amount of data they can justify. To compensate, regulators commonly add in safety factors of ten. That is, knowing their numbers are based on crude guesses, they decide to err on the safe side by making their regulatory standard ten times stricter than the risk calculation would suggest. Despite these assumptions, extrapolations, and safety factors, the language of risk remains misleadingly precise, full of pronouncements such as "a concentration of 0.5 mg/kg will result in an increased cancer risk of less than 1x10^-6." The precision of the language can easily divert our attention from what may be the wild inaccuracy of the findings.

Another widely recognized problem with risk calculations is the practice of considering each risk factor separately (Wargo, 1996). The scientific method relies on examining one variable at a time. So, for example, in order to assess the risk of ground water pollution from a landfill, researchers use thresholds set for each individual chemical (and there may be hundreds). But they do not calculate the risk of these chemicals in combination with each other–let alone take into consideration the effect of potential risk factors from outside the landfill. It is already very expensive to do a very limited one-at-a-time risk assessment. Calculation of cumulative risk factors is simply not feasible with current methods. But the real world is not a one-risk-at-a-time laboratory.

Moreover, there is a false discreteness in risk calculations of the one-cancer-in-a-million sort, for they imply that only that one person in a million will be affected by the particular risk under consideration. But there may well be effects below the threshold of an actual case of cancer. This is the problem Shrader-Frechette (1991: 70) calls "the contributor's dilemma." In combination with sub-threshold effects from other sources, cancer rates may indeed rise above one-in-a-million. But the discreteness of risk calculations, in combination with the separatism of risk calculations, obscures our recognition of this unhappy possibility–a possibility we illustrate numerically below.

The numerical character of the language of risk has other limitations as well. We use concepts of risk to help us make decisions, but there is much more going on in our decisions than numbers can convey. Say a community is deciding among several waste disposal options. They do a risk assessment for two different landfill locations and, for good measure, they do a risk assessment for a waste-to-energy plant as well.
But a community cannot just take the results of the studies and pick the option with the lowest risk, even assuming they have confidence in the numbers. What about the fact that the site east of town, while the safest in terms of geology, will disproportionately affect the Latino community? What about traffic, noise, and smell, none of which affect the cancer risk, but all of which are legitimate considerations for the community? What about cost? What about the fact that none of the options assessed included a strict recycling and hazardous materials collection program, because the technocrats just didn't think people would do that stuff?

But because it is so costly to do risk assessments and because numbers carry connotations of universality and objectivity, there is often a tendency for risk numbers to carry disproportionate weight in decision-making. Moreover, there is a temptation to put all those other factors in decision-making into numerical language so as to make them apparently comparable. Indeed, there is often a temptation to put those factors into the numerical language of risk–to turn risk into a single, universally applicable scale of comparability–as in the risk of increased smell or noise. Risk here plays much the same social role that Simmel (1990 [1900]: 121) described for money: "it not only establishes a relationship to all kinds of concrete values, but can also indicate relations among value quantities," however illusory those relations may be. Thus, with our landfill siting example, one can quantify and compare the cost of siting the landfill east of town with that of siting it to the south. And one can quantify and compare the risk assessments for both sites. But it is meaningless to compare the health risks of one site to, say, the monetary cost of the other. As well, there are some things that simply do not lend themselves to being measured by numbers–whether in terms of risk or money or some other scale of comparability–difficult as our rationalistic culture may find that to accept.

Further, as Wargo (1996) and others have pointed out, risk assessments falsely homogenize populations. For example, children have different diets, physiology, and behavior than adults, and thus calculations of risk from pesticide residues on foods made for adults will not apply to children. Clearly, all sorts of other groups will also have different risks than the mythical average population, yet calculating these different risks is well beyond the capabilities and budgets of the democratic process.

In other words, the technical is the political. We said in the introduction that the growth of a rationalistic conception of risk was associated with the rise of democracy. And to be sure, risk assessment has been introduced into the political process in part to limit the ability of corporations, and other entities, to endanger people with pollution or poor working conditions without check or explanation. Democratically inspired regulatory agencies require risk assessments in order to manage and limit the damage industrialism does to society. Moreover, the claim of rationalism to represent all and apply to all–a nameless truth, unattached to particular power interests–is in part heartfelt on the part of those who advance rationalistic arguments. But though its growth may have been fostered by democratic impulses, the language of risk is anything but democratic.
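Before turning to the politics of these technical shortcomings, a brief numerical illustration may help fix them in mind. The sketch below is ours alone, offered in the spirit of the critiques above rather than drawn from any actual assessment: the dose, the potency factor, and the number of pollution sources are all invented for illustration. It shows, first, how a precise-looking one-in-a-million figure can be built from point estimates that might each be off by a factor of ten, and second, the contributor's dilemma, in which several sources that each pass a one-in-a-million threshold jointly exceed it.

```python
# A back-of-the-envelope sketch, with invented numbers, of two problems
# discussed above: false precision and the contributor's dilemma.

# (1) False precision. A typical calculation multiplies an estimated dose
# by a potency ("slope") factor extrapolated from animal studies. Both
# figures here are hypothetical, chosen only to echo the pronouncement
# quoted in the text.
dose = 0.5             # exposure in mg/kg (invented)
slope_factor = 2.0e-6  # cancer potency per mg/kg (invented)
risk = dose * slope_factor
print(f"point estimate:  {risk:.1e}")  # prints 1.0e-06, impressively exact

# If each input could plausibly be off by a factor of ten in either
# direction, the honest answer spans four orders of magnitude around the
# same "precise" number.
low = (dose / 10) * (slope_factor / 10)
high = (dose * 10) * (slope_factor * 10)
print(f"plausible range: {low:.0e} to {high:.0e}")  # 1e-08 to 1e-04

# Regulators acknowledge as much by dividing the acceptable exposure by a
# safety factor of ten: crudeness codified into the protocol itself.
regulatory_standard = dose / 10

# (2) The contributor's dilemma. Eight sources, each individually below a
# one-in-a-million threshold, together push the exposed population past it.
threshold = 1e-6
sources = [0.4e-6] * 8  # each source "passes" the threshold on its own
assert all(s < threshold for s in sources)
print(f"combined risk:   {sum(sources):.1e}")  # prints 3.2e-06, over threshold
```

The arithmetic itself is trivial; the point is that the final printed number carries a precision its inputs cannot support, and that one-at-a-time threshold tests hide cumulative effects. The political consequences of these limitations are what concern us next.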
First, the false accuracy, false separatism, false discreteness, false significance, and false homogenization of rationalistic risk open up the potential for much political mischief. And it can be very hard to critique this mischief. Risk assessment, as we have already mentioned several times, is very expensive. The data collection is particularly costly, as it requires a great deal of highly trained (and highly paid) labor and expensive equipment in constant need of updating. Government pays for some of it, but the vast majority is paid for by those generating the risks, in most cases corporate interests. Moreover, and particularly in recent times, corporate interests are often able to generate sufficient political pressure to guide much of the direction government-sponsored research takes. Corporate interests are also able to guide the course of much of academic research through grants to universities, a practice that is becoming increasingly common. The result is that the language of risk is largely owned by the wealthy. Critics generally do not have the time, the money, or the machines to perform or to hire comparable studies that would stand up equally well in political debates carried out in the hierarchical language of risk. The cost of data collection is why critiques of risk numbers are generally limited to criticisms of the methods used or the conclusions drawn, and seldom based on counter-studies. But even criticisms of methods and conclusions require considerable technical expertise (which is not cheap) and are in that sense undemocratic.

Further, rationalistic risk assessment tends to divide people into the affected and the unaffected, breeding complacency. Calculations like one-in-a-million encourage people to think about risk individually, in terms of what the risk is for me. As Wargo (1996) observes, most risk assessment is based on a utilitarian conception of democracy and justice–the greatest good for the greatest number. And as long as I am in that greatest number, why should I be concerned? If the risk is low for me and my community, why should I worry? Or even if I do worry, should I marshal my limited resources for political engagement–my time and my money and my political contacts–for somebody else's problem when I have more than enough problems of my own already? Risk thus becomes yet another manifestation of economistic egoism. In other words, not only is access to the language of risk difficult, but risk also enervates criticism through the age-old tactic of divide-and-conquer.

Risk Communication as Rationalization

As we mentioned in the introduction, the language of risk remains the most politically legitimate way for us to discuss and debate life's dangers and uncertainties. But despite our deference to science, the numbers do not speak for themselves. Most people do not need to read academic articles to sense the objections to risk assessment mentioned above. When told that a proposed hazardous waste facility or nuclear power plant or whatever poses a cancer or birth defect or other risk of less than one in a million, communities do not tend to respond with "Oh, that's all right then." The citizens may respect science. They may not have the resources to challenge the risk assessment on its own terms. They may not even be able to express persuasively why a low risk finding is not particularly reassuring, but many individuals and communities will go on opposing whatever it is they are concerned about.
This response frustrates the specialists and corporate interests who have played by the rules laid down by government and spent a great deal of money to produce risk assessments. And so the field of risk communication is born. At its best, risk communication tries to explain why the numbers are not enough and to propose models for dialogue between stakeholders. But at its worst, risk communication is a thinly disguised form of public relations. Indeed, much of the risk communication literature appears in journals and book series dedicated to public relations. Sadly, most of the practical applications of risk communication lean towards the public relations side.

The public relations interpretation of risk communication is encouraged by the undemocratic implications of the rationalistic language of risk. We see six common flaws in the risk communication literature that reveal this persistent undemocratic bias. These flaws are most glaring in the earlier literature, but they persist in contemporary discussions.

First, most risk communication efforts proceed from the premise that the experts have the right answer. Recent research by Vincent Covello, a prominent academic figure in risk communication, illustrates this one-sided view. Covello (1996) studied the success of a three-week-long "Environmental Health Risk Module" in educating high school students to agree with the "correct" answers to a series of statements about risk. In Covello's own words, the goal of the study was "to test the ability of instructional materials to improve student's knowledge and understanding of principles of environmental health risk assessment" (Covello, 1996: 83, our emphasis). Covello tested students' attitudes toward risk before and after they took the module. Students were asked to score the degree of their agreement with statements like "I am very concerned about health risks due to chemicals introduced into the environment by industry" and "If pesticide residues are found in food at levels substantially below the legal limits, the food is safe to eat." Disappointingly, Covello noted, the module produced only a slight decline in the students' agreement with the first statement, but encouragingly produced a great increase in students' agreement with the second statement. Overall, Covello (1996: 85) was pleased enough with the results to report that "a short intervention can lead to outcomes that are both statistically significant and educationally meaningful."

Second, there is an implicit corollary to the notion that the experts are right which pervades the risk communication literature: the public is irrational. In the 1980s, psychologist Peter Sandman (1989) proposed characterizing public objections to risk as dependent in large measure on "outrage factors" such as perceptions of the fairness, moral relevance, dreadedness, and voluntariness of the risk. This language rapidly became popular, and remains popular, in corporate risk handbooks and training videos (cf. Lerbinger, 1997; Lundgren, 1994 [1998]; Covello, Sandman, and Slovic, circa 1993). One such training video, a taped course presented by Sandman himself, is provocatively entitled "Risk=Hazard+Outrage...A Formula for Effective Risk Communication." Under this formula, experts are depicted as dispassionate and the public as emotional.
As Lundgren (1994 [1998]), an advocate of the Sandman approach, describes the formula, "the audience's view of risk (as opposed to that of the experts assessing the risk) reflects not just the danger of the action (hazard) but also how they feel about the action and, even more important, how angry they feel about the action (their outrage)." Experts evidently do not have feelings and are never given to anger in their interactions with the public–a dubious interpretation, in our view, of what goes on in the heated atmosphere of a public hearing.

This characterization of the public as irrational is not limited to Sandman and his supporters. Another author (Lukaszewski, 1992), a CEO of a mid-sized company, has discussed the importance of "managing fear" and of "finding a way to help the public manage its growing anguish and still accomplish our business objectives." And in the lead chapter of the EPA's Handbook for Environmental Risk Decision Making (Cooper, 1996: 9), we read that "The bottom line is that our society is like a bunch of spoiled brats who want an affluent lifestyle based on a throw-away society, supplied by synthetic chemistry and risk free at the same time." Here the charge is not that emotion makes the public irrational; rather, greed blinds the public to the realities of life. Others such as Baruch Fischhoff emphasize the importance of recognizing the "mental models" that the public uses to process information about risk (Fischhoff, 1996), yet suggest no parallel emphasis on the mental models experts use to process information about risk.

Third, a corollary of both of the above is that the purpose of risk communication is to find the appropriate strategies for convincing an irrational public that the experts are right. One of the principal topics of the risk communication literature is the identification of the sources of the public's resistance to certain risk communication techniques–what Lundgren (1994 [1998]) blandly calls "constraints to effective risk communication" in a chapter on the subject–and how to overcome them. There is rarely any discussion in the literature of whether the resistance is valid. The message, in the main, is: some techniques do not work, so you had better try something else if you want to achieve "effective risk communication."

For example, some risk communication literature warns against the use of certain risk comparisons because they are less acceptable to the public. A training manual prepared by Covello, Sandman, and Paul Slovic for Dow Chemical instructs that "it is possible to rank different kinds of risk comparisons in terms of their acceptability to people in the community" (Covello, Sandman, and Slovic, circa 1993: 51). The manual distinguishes between five different ranks of risk comparisons. Three forms of comparison are ranked "most acceptable": "comparisons of the same risk at two different times," "comparisons with a standard," and "comparisons with different estimates of the same risk." In the second rank are "less desirable" forms of comparison, such as "comparisons of the risk of doing something versus not doing it" and "comparisons of alternative solutions to the same problem." The third rank contains "even less desirable" forms; the fourth, "marginally acceptable" forms. At the bottom are those forms described as "rarely acceptable, use with extreme caution," such as "comparisons of two or more completely unrelated risks" and "comparisons of an involuntary risk to a voluntary risk."
In making choices between the various ranks of comparisons, the manual advises:

The general rule of thumb is: Select from the highest-ranking risk comparisons whenever possible. When you have no choice but to use a low-ranking risk comparison, do so cautiously, being aware that the risk comparison could well backfire....Used with awareness and understanding, risk comparisons can be a valuable and useful tool in risk communication. You need to make sure, however, that they are tailored to your audience and their expectations.

In other words, the main issue in risk communication is evidently not whether or not the public is right to regard certain forms of comparison as spurious. Rather, it is that the risk communicator needs to "tailor" risk comparisons to the audience, for otherwise the communicator's presentation could "backfire." Lundgren (1994 [1998]: 60) further suggests that risk communicators avoid presenting the results of "too many studies or contradictory studies" to the public: "If the results vary widely, you are reinforcing recognition of the uncertainties involved." Rather than doing something that will "confuse or alienate your audience ('I knew it–these scientists will say anything!'), you may want to try another way of comparing the risk," Lundgren advises. Whatever the intent of the extensive research base in risk communication, by the time it reaches the industry handbook, the training manual, and the training video, it has been reduced to guidelines for the informed propagandist.

Fourth, there is a persistent myth that the opposition is between the experts and the public. Opponents of a particular risk assessment are characterized as lay people, proponents as sound scientists. In fact, in almost every case, scientists are on both sides of the argument, as are nonscientists. One case frequently cited in risk communication lore is the story of the precipitous drop in apple consumption following television coverage of the possible carcinogenicity of Alar and its use on apples. Of course, it was scientists who first suggested that Alar was a human health hazard, leading the American Academy of Pediatrics in 1986 to recommend banning it and the Environmental Protection Agency in 1984 and 1987 to list it as a probable carcinogen. But the risk communication literature describes the 1989 controversy over Alar as a classic example of the triggering of outrage factors in the public. The focus of the literature is on how to avoid such public relations fiascoes in the future. The public was outraged, but among that outraged public were many scientists, and this is typically the case.

Fifth, the risk communication literature is silent on outrage factors and the need to manage fears in industry. One could just as easily describe the apple industry's response to the Alar controversy as hysterical as one could the public's. Repeatedly, when specific pesticides have come into question, industry has predicted crop failures and skyrocketing food prices should that pesticide be withdrawn. But from DDT to EDB, agriculture has survived the loss of these two-edged swords (Lehman, 1993).
In advising risk communicators from the food industry about the importance of providing "long-term education" in the aftermath of Alar, the senior editor of the journal Food Engineering, in an article later approvingly included in yet another recent industry risk handbook (Lerbinger, 1997: 286), reasoned that "The public gets upset when it suddenly gets new information, such as that carcinogens are in foods, even though they are at levels that are not harmful. They should be told that an abundant and inexpensive food supply depends on the use of pesticides." But the food industry also gets upset when it suddenly gets new information, such as study results which indicate that potentially harmful carcinogens are being incorporated into its products. For example, in 1999 the American Farm Bureau responded to a just-released report by the Environmental Working Group on pesticide residues in American foods by describing it as "a shameless attempt to frighten parents and an arrogant power play." In the words of Dean Kleckner, the American Farm Bureau president, "it is unconscionable that 10 years after the debunked Alar scare, EWG [the Environmental Working Group] is once again foisting on the American public a report rooted in junk science and antipesticide propaganda" (Iowa Farm Bureau, 1999: 4). Whatever the merits of the Environmental Working Group's report, these statements hardly count as dispassionate and calm responses on the part of the American Farm Bureau. Meanwhile, ten years after the Alar controversy, apples remain a staple of the American diet–despite industry fears.

Sixth, risk communication is treated as something to be initiated and managed by the "risk manager." Although for at least the past decade there has been emphasis in the literature on dialogue and open discussion with the public (Ahearne, 1990; Cohn, 1996; Lerbinger, 1997; Morgan, 1993), it is nevertheless in these writings nearly always the risk manager who is to initiate the discussion–usually long after the risk assessment studies are complete. And risk managers are, as Lundgren (1994 [1998]: xi) defines them, "those high-ranking individuals who actually make and implement decisions about a risk"–not the public. There are numerous handbooks on risk communication for the executive, scientist, and spokesperson, but we have not come across any risk communication handbooks for the citizen seeking to engage industry, science, and the state. Moreover, in light of the risk literature's predominant attitudes toward the public, the much discussed issue of "trust" (Covello and Peters, 1996; Freudenburg, 1996; Leiss, 1995) is likely to remain a substantial barrier to "effective risk communication"–even if that communication is conceived as a dialogue.

Because the received wisdom appears to be that the experts are right, that the public is irrational, and that the wise risk manager needs to choose appropriate means of communicating the experts' view of risk lest the public lapse into emotion, outrage, and illogic, the opportunity for public manipulation is considerable. In the case of the food industry's response to the Alar controversy, that has meant contributing generously to industry front groups like the Hudson Institute and the American Council on Science and Health (ACSH). These groups produce their own studies, neatly packaged for the media and authored by articulate spokespeople who are quick with the memorable sound bite, such as the Hudson Institute's Dr. Dennis Avery and ACSH's president Elizabeth Whelan.
In its most extreme form, manipulative risk communication results in legal maneuvering to withhold information from the public altogether. One example of this tactic is "food libel laws," such as the Texas statute which became notorious after the unsuccessful 1998 suit brought by the Texas Cattlemen's Association against the talk-show host Oprah Winfrey over a remark she made about "mad cow disease" and American hamburger meat. These laws, notes The Economist (1994: 28), "are intended to discourage people from making claims about produce in public based on data that are incorrect or have no scientific support" and "are a reaction to the 1989 scare over Alar." Since the public is allegedly incapable of making rational decisions about risk, the sentiment among the legislators in some states is evidently that the public should not be given the opportunity to make decisions about risk in the first place.

Another trend in this direction is the restriction of labeling, as exemplified in the politics of recombinant bovine growth hormone, or rBGH–or rbST, standing for recombinant bovine somatotrophin, as the industry prefers to call it to avoid the reference to hormones. When rBGH was allowed for use in milk production in the US, Monsanto opposed not only the labeling of milk from cows treated with rBGH but even the voluntary labeling of milk from cows not so treated. They based their opposition to any product labeling on the contention that the public would not understand the logic of risk assessment and would falsely assume that milk produced from cows treated with rBGH could pose health risks. This campaign was strikingly successful. Most dairies were dissuaded from even trying to distinguish between milk from cows treated with rBGH and that from cows not so treated. And the few that have ventured to label the non-rBGH milk are required to couple the label with the disclaimer that "No significant difference has been shown between milk derived from rbST treated and non-rbST treated cows." Similarly, US companies have so far successfully resisted even voluntary labeling of American food products containing or not containing genetically engineered crops. They have done so on the same grounds: the public is too ignorant to understand that no ill effects from genetically engineered crops on human health have been proven and would misinterpret labeling to imply there might be ill effects. In fact, the possibility of ill effects from these substances has also not been disproven; both in the case of rBGH and genetically engineered crops there is considerable scientific debate about their safety, particularly in Canada and the European Community. Indeed, the US is the only major country which currently allows the use of rBGH (Gilbert, 1999).

In short, risk communication is infected with a contempt for the public, which perpetuates its undemocratic bias and also ensures the continued failure of risk communication efforts.

Risk, Social Science, and Democracy

Risk is a far from neutral language. Rather than representing interest-free rationality, nameless knowledge that applies to everyone, risk represents the deeply interested knowledge of those who are able to command it. Much of the public anger and outrage that the field of risk communication has sought to belittle stems from the public's increasing recognition of the deep interests at work in the language of risk.
Much of this anger is new, though, leading to the recent development of risk communication as an academic field and corporate necessity, and it is worth considering why it is new. The not-so-subtle implication of most risk communication studies is that this anger represents some form of new mass hysteria, Le Bon's The Crowd turning toward risk. Rather, it is our interpretation that the post-World War II shine on technology–a brief interlude of a couple of decades at most–is finally starting to wear off, and we are now returning to the kind of critical popular understanding of technology and the economic interests behind it that undergirded the labor movement during the great strikes of the late nineteenth and early twentieth centuries. Moreover, the increasing development of democratic institutions and democratic sensibilities has given the people the sense that they don't have to put up with being dumped on by interests which are not their own. The reaction against risk represents democracy, not the hysteria of the ill-informed; indeed, as Clarke and Freudenburg (1993) observe, the public is often highly knowledgeable of the issues behind risk. We hope that in the foregoing we have assisted in what we see as the democratic task of naming the interests that dominate the language of risk, from the risk communicators and risk managers of industry seeking to mollify the public to the scientific risk assessors elevated by their secure feeling that they are on the side of the grail of rationality.

Environmental social scientists have of late taken great interest in the study of risk. In view of risk's undemocratic history, we see a crying need here for the social scientific point of view. Indeed, some of the few democratic voices in the various risk handbooks and academic compendiums on risk that we earlier cited are those of environmental social scientists (Clarke and Freudenburg, 1993; Freudenburg, 1996; Renn, Webler, and Kastenholz, 1996). But we also would like to urge that environmental social scientists keep their eyes wide open as they develop this new interest in the study of risk. Risk has become something of an academic bandwagon in the field, and increasing numbers of environmental social scientists are running alongside, grasping for an intellectual handhold so as to hike themselves over onto the wagon bed. The rewards for doing so are plain. Environmental social science has lingered on the sidelines of public debate for most of its life. It remains the common experience of nearly every environmental social scientist regularly to have to answer the question, "What does social science have to do with the environment?" And if risk has come to the center of public attention, environmental social science can too, by reorienting the work of the field toward the media star that risk has become. Besides, it looks like there is grant money to be had for research and international conferences.

Contributing to public debate is a highly worthy ambition; indeed, it should be the highest ambition of any social science, in our view. But let us not let our desire to gain public interest in our work overrun our critical sensitivities. Otherwise, environmental social science could wind up promoting the acceptability of risk's undemocratic language by uncritically placing it at the center of the discipline–particularly if the public does start paying attention. One source of our concern about critical sensitivities overrun is the celebrated work of Ulrich Beck.
The attempt by environmental social scientists to find an intellectual handhold on the risk wagon has drawn much of its conceptual strength from Beck's writings and his theory of the "risk society." There is much to admire in Beck's work, and we are thankful that his writings have brought a much-needed interest in theory to the field. But we are concerned about a number of problems in his work, most particularly his view that society is being fundamentally realigned by the rise of risk and his stunning inarticulateness on the subject of inequality, the precise problem that concerns us with the language of risk in general. Beck, as is now well known among environmental social scientists, argues that as modern industrial society develops, its "momentum of innovation increasingly elude[s] the control and protective institutions," leading to the creation of a "risk society." As Beck (1996: 28) describes:

Risk society is not an option which could be chosen or rejected in the course of political debate. It arises through the automatic operation of autonomous modernization processes which are blind and deaf to consequences and dangers. In total, and latently, these produce hazards which call into question–indeed abolish–the basis of industrial society.

A major result of these autonomous processes is that class-based conflicts over the distribution of goods are being increasingly replaced or overlain (depending upon which bit of Beck one reads) by conflicts over the distribution of "bads," and by "bads" Beck means risk. And yet, risk applies to everyone, such that there is in the risk society an "equality of risk." As Beck (1992 [1986]: 36) terms it in his much-repeated aphorism, "[P]overty is hierarchic, smog is democratic."

The "risk society" thesis strikes us, among others in and out of the environmental social sciences, as a problematic view (Alexander, 1996; Bell, 1998; Blowers and Leroy, 1998; Buttel, 1997; Irwin and Simmons, 1998; Renn, 1997). While we agree that the current situation is characterized by extensive questioning of technological hazards, Beck's account of the origin of this questioning smacks of technological determinism. For Beck it is not new critique which has led to the questioning of hazards, but rather new hazards which have led to critique, falsely in our view promoting the notion that life in modern societies is more hazardous and that people are more worried. We do not doubt that the existence of hazards is central to the existence of critique of them, but there seems no basis for holding that society today is any more characterized by hazards and worries than was early modernity. In this sense, as Brian Wynne (1996) points out, Bruno Latour (1992) is right: we have never been modern. The uncertainty of life today is not so very different from that of the past. Rather, the coming of risk, as we have argued, represents the parallel growth of rationalism and democracy, leading to a new language for debating the hazards of contemporary life. The conflict over risk is due to the undemocratic basis of its rationalizations within the context of an increasingly democratic society.

And more importantly, we worry that Beck's arguments about the equality of risk and its superseding of conflict over the distribution of goods divert social and social scientific attention from the inequalities of contemporary society. There remain, of course, sharp differences in the distribution of goods in even the wealthiest and most technological of nations.
Moreover, there is substantial evidence that those with greater social power are able to ensure that there is no equality in the distribution of bads, as the environmental justice movement has made plain. In light of these continuing inequalities, we are concerned that Beck's arguments could fit all too comfortably with the false neutrality of the technocrat's use of the language of risk.

But even if a more satisfactory theoretical handhold could be found for vaulting the environmental social sciences over the side and onto the wagon bed, we would argue against risk becoming a central theoretical construct for the field. Risk has become an enormously popular metaphor, and social scientists, among others, are rushing willy-nilly to apply it to everything. Let the environmental social sciences show more discernment. Risk is a highly limited language for thinking sociologically about environmental problems. Many of the issues of environmentalism simply do not fit within risk's rationalistic language. Species loss, landscape change, loss of place, and the spiritual and sensual dimensions of environmental experience cannot be adequately captured as cold, calculated risk. Promoting the language of risk may thus promote the sidelining of these issues. Indeed, we would argue, it is even hard to talk about more material environmental issues like climate change as "risk" because of the technical and social limits of risk's rationalisms. As far as environmental social science is concerned, risk is a theoretical red herring.

But this does not mean that risk is an unimportant topic. The idea of risk has utility in environmental decision making. It is often useful to try to quantify the likelihood of harm from the actions of corporations, governments, individuals, and even the environment itself. The attempt at objectivity can be an attempt at democracy. But it can also be an attempt on democracy. Risk inevitably opens the Pandora's box of rationalism, and of this we must be ever mindful. As Thomas Jefferson might have said, the price of the proper use of risk is eternal vigilance. And here is where environmental social science can play an important role in the study of risk: keeping risk honest–a tool and not a smoke screen–and keeping risk in perspective as only one mode among many for dealing with the unknown.

Risk, as we have said, is a rationalistic means for comprehending uncertainty. But on closer inspection, the real uncertainty at stake in the language of risk is not that of carcinogens in our food, nitrates in our water, and radioactivity in our air. Rather, the real uncertainty at stake in the language of risk is the relationship between power and democracy. Environmental social scientists, if they are concerned about the current state of that relationship, should direct their work at critiquing the language of risk, not promoting it. Only in this way can the environmental social sciences assist in the important task of releasing us all from the iron cage of risk.
